
    Photometric stereo for 3D face reconstruction using non-linear illumination models

    Face recognition in the presence of illumination changes, pose variations, and different facial expressions is a challenging problem. In this paper, a method for 3D face reconstruction using photometric stereo, without knowledge of the illumination directions or facial expression, is proposed in order to improve face recognition. A dimensionality reduction method was introduced to represent the face deformations due to illumination variations and self-shadows in a lower-dimensional space. The obtained mapping function was used to determine the illumination direction of each input image, and that direction was used to apply photometric stereo. Experiments with faces were performed to evaluate the performance of the proposed scheme. The experiments show that the proposed approach yields very accurate 3D surfaces without knowing the light directions, with very small differences compared to the case of known directions. As a result, the proposed approach is more general, imposes fewer restrictions, and enables 3D face recognition methods to operate with less data.
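The photometric-stereo step applied once the illumination directions are known (or estimated) can be summarized as follows. This is a minimal sketch under a Lambertian assumption, with hypothetical function and parameter names (`photometric_stereo`, `images`, `light_dirs`); it takes the per-image light directions as given, whereas the paper's contribution is recovering those directions via the learned mapping function, which is not reproduced here.

```python
# Minimal sketch: classical Lambertian photometric stereo with given light directions.
# The estimation of those directions (the paper's dimensionality-reduction mapping)
# is assumed to have already happened and is not shown.
import numpy as np

def photometric_stereo(images, light_dirs):
    """Recover per-pixel surface normals and albedo from a stack of images.

    images     : (k, h, w) array, k images of the same face under k lights
    light_dirs : (k, 3) array, unit illumination directions (assumed known
                 or already estimated)
    """
    k, h, w = images.shape
    I = images.reshape(k, -1)                              # (k, h*w) stacked intensities
    # Lambertian model: I = L @ (albedo * normal), solved per pixel by least squares
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)     # (3, h*w)
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.maximum(albedo, 1e-8)
    return normals.reshape(3, h, w), albedo.reshape(h, w)
```

The recovered normal field can then be integrated into a depth map, giving the 3D face surface used for recognition.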

    General Automatic Human Shape and Motion Capture Using Volumetric Contour Cues

    Markerless motion capture algorithms require a 3D body model with properly personalized skeleton dimensions and/or body shape and appearance to successfully track a person. Unfortunately, many tracking methods treat model personalization as a separate problem and use manual or semi-automatic model initialization, which greatly reduces applicability. In this paper, we propose a fully automatic algorithm that jointly creates a rigged actor model commonly used for animation - skeleton, volumetric shape, appearance, and optionally a body surface - and estimates the actor's motion from multi-view video input only. The approach is rigorously designed to work on footage of general outdoor scenes recorded with very few cameras and without background subtraction. Our method uses a new image formation model with analytic visibility and an analytically differentiable alignment energy. For reconstruction, the 3D body shape is approximated as a Gaussian density field. For pose and shape estimation, we minimize a new edge-based alignment energy inspired by volume raycasting in an absorbing medium. We further propose a new statistical human body model that represents the body surface, the volumetric Gaussian density, and the variability in skeleton shape. Given any multi-view sequence, our method jointly optimizes the pose and shape parameters of this model fully automatically in a spatiotemporal way.
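As a rough illustration of the volumetric representation mentioned above, the sketch below evaluates a body shape approximated as a sum of isotropic 3D Gaussians attached to the skeleton. The function and parameter names (`gaussian_density`, `centers`, `stds`, `weights`) are assumptions for illustration only; the paper's analytically differentiable, edge-based alignment energy built on top of such a field is not reproduced.

```python
# Minimal sketch: a Gaussian density field as a volumetric body approximation.
# Each Gaussian is carried along with the skeleton pose; the density at a query
# point is the sum of the Gaussians' contributions.
import numpy as np

def gaussian_density(points, centers, stds, weights):
    """Evaluate the Gaussian density field at query points.

    points  : (n, 3) query positions
    centers : (m, 3) Gaussian centers
    stds    : (m,)   isotropic standard deviations
    weights : (m,)   per-Gaussian density magnitudes
    """
    # Squared distances between every query point and every Gaussian center
    d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)   # (n, m)
    return (weights * np.exp(-0.5 * d2 / stds**2)).sum(axis=1)       # (n,)
```

A smooth, everywhere-differentiable field of this kind is what makes the gradient-based pose and shape optimization described in the abstract tractable.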

    Kinematic synthesis of avatar skeletons from visual data

    The recovery of 3D models from visual data generally results in a geometric skeleton that is a simplification of the shape of the captured figure, called an avatar. In this paper, we introduce a method called hierarchical kinematic synthesis that identifies an articulated skeleton, called the kinematic skeleton, which provides a compact representation of the movement of the tracked subject. In this work, the kinematic skeleton of a human is computed from a finite number of key poses captured from full-body articulated movements of an arbitrary subject, and it provides the locations of the joints and the lengths and twist angles of the links that form the limbs of the 3D avatar. We use an approximation of the human skeleton consisting of five serial chains constructed from revolute and spherical joints. To recover the kinematic skeleton, a hierarchical approximate finite-position synthesis methodology determines the dimensions of these chains limb by limb. We show that this technique effectively recovers the kinematic skeleton for several synthetically generated datasets, and that identifying the kinematic skeleton improves pose estimation for 3D data while simplifying the generation of avatar movement.
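To make the notion of a serial limb chain described by link lengths and twist angles concrete, the sketch below evaluates the forward kinematics of one such chain using Denavit-Hartenberg style parameters. The names (`dh_transform`, `limb_forward_kinematics`) and the DH convention itself are assumptions for illustration, not the paper's formulation; the paper's hierarchical finite-position synthesis, which recovers these parameters from key poses, is not shown.

```python
# Minimal sketch: forward kinematics of one serial limb chain built from revolute
# joints, parameterized by link length and twist angle (Denavit-Hartenberg style).
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform for one link: joint angle theta, offset d,
    link length a, twist angle alpha."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.,       sa,       ca,      d],
        [0.,       0.,       0.,     1.],
    ])

def limb_forward_kinematics(joint_angles, link_params):
    """Positions of every joint along one serial limb chain.

    joint_angles : sequence of revolute joint angles theta_i
    link_params  : sequence of (d, a, alpha) per link, i.e. the quantities
                   a synthesis procedure would need to recover
    """
    T = np.eye(4)
    joints = [T[:3, 3].copy()]
    for theta, (d, a, alpha) in zip(joint_angles, link_params):
        T = T @ dh_transform(theta, d, a, alpha)
        joints.append(T[:3, 3].copy())
    return np.array(joints)
```

Fitting the link parameters so that the predicted joint positions match the captured key poses, limb by limb, is the role of the hierarchical synthesis step described in the abstract.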